Along with the widespread use of face recognition systems, their vulnerability has been increasingly highlighted. While existing face anti-spoofing methods can generalize across attack types, generic solutions remain challenging due to the diversity of spoof characteristics. Recently, the spoof trace disentanglement framework has shown great potential for coping with both seen and unseen spoof scenarios, but its performance is largely restricted by the single-modal input. This paper focuses on this issue and presents a multi-modal disentanglement model that learns polysemantic spoof traces in a targeted manner for more accurate and robust generic attack detection. In particular, based on an adversarial learning mechanism, a two-stream disentangling network is designed to estimate spoof patterns from the RGB and depth inputs, respectively. In this way, it captures complementary spoofing clues inherent in different attacks. Furthermore, a fusion module is employed that recalibrates both representations at multiple stages to promote the disentanglement in each individual modality, and then performs cross-modality aggregation to deliver a more comprehensive spoof trace representation for prediction. Extensive evaluations are conducted on multiple benchmarks, demonstrating that learning polysemantic spoof traces favorably contributes to anti-spoofing with more perceptible and interpretable results.
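A minimal PyTorch sketch of the two-stream-plus-fusion idea described above: separate RGB and depth encoders, a recalibration/aggregation module, and spoof-trace and classification heads. Module names, channel sizes, and the gating design are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    """One modality stream (RGB or depth) producing a spoof-related feature map."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class FusionModule(nn.Module):
    """Recalibrates each stream with channel attention, then aggregates across modalities."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(2 * ch, 2 * ch, 1), nn.Sigmoid())
        self.mix = nn.Conv2d(2 * ch, ch, 1)
    def forward(self, f_rgb, f_depth):
        cat = torch.cat([f_rgb, f_depth], dim=1)
        cat = cat * self.gate(cat)          # per-channel recalibration of both streams
        return self.mix(cat)                # cross-modality aggregation

class TwoStreamSpoofNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_enc, self.depth_enc = StreamEncoder(3), StreamEncoder(1)
        self.fusion = FusionModule(64)
        self.trace_head = nn.Conv2d(64, 3, 1)   # per-pixel spoof trace estimate
        self.cls_head = nn.Linear(64, 2)        # live / spoof prediction
    def forward(self, rgb, depth):
        fused = self.fusion(self.rgb_enc(rgb), self.depth_enc(depth))
        trace = self.trace_head(fused)
        logits = self.cls_head(fused.mean(dim=(2, 3)))
        return trace, logits

# Usage: trace, logits = TwoStreamSpoofNet()(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
```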
This work presents the Time-reversal Equivariant Neural Network (TENN) framework. TENN incorporates time-reversal symmetry into the equivariant neural network (ENN), generalizing the ENN to physical quantities related to time-reversal symmetry, such as the spin and velocity of atoms. TENN-e3, the time-reversal extension of the E(3) equivariant neural network, is developed to preserve time-reversal E(3) equivariance, with or without the spin-orbit effect, for both collinear and non-collinear magnetic moments in magnetic materials. TENN-e3 can construct spin neural network potentials and the Hamiltonians of magnetic materials from ab initio calculations. Time-reversal E(3)-equivariant convolutions for the interactions of spinor and geometric tensors are employed in TENN-e3. Compared to popular ENNs, TENN-e3 can describe complex spin-lattice coupling with high accuracy and maintains time-reversal symmetry, which is not preserved in existing E(3)-equivariant models. The Hamiltonian of a magnetic material with time-reversal symmetry can also be built with TENN-e3. TENN paves a new way toward spin-lattice dynamics simulations over long time scales and electronic structure calculations of large-scale magnetic materials.
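A toy numerical check of the time-reversal constraint discussed above: for a spin-lattice model without spin-orbit coupling, the energy must be invariant when every magnetic moment is flipped (T: S_i -> -S_i) while atomic positions are unchanged. The Heisenberg-like pair energy below is such a model; this is an illustrative sketch of the symmetry, not the TENN-e3 architecture itself.

```python
import numpy as np

def heisenberg_energy(positions, spins, J=1.0, cutoff=1.5):
    """E = -J * sum_{i<j, |r_ij| < cutoff} S_i . S_j  (even in the spins)."""
    E, n = 0.0, len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < cutoff:
                E -= J * np.dot(spins[i], spins[j])
    return E

rng = np.random.default_rng(0)
pos = rng.uniform(0, 2, size=(6, 3))
spin = rng.normal(size=(6, 3))

# Time reversal flips all spins; the energy of this SOC-free model is unchanged.
assert np.isclose(heisenberg_energy(pos, spin), heisenberg_energy(pos, -spin))
```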
Learning accurate depth is crucial for multi-view 3D object detection. Recent approaches mainly learn depth from monocular images, which face inherent difficulties due to the ill-posed nature of monocular depth learning. In this work, instead of relying solely on monocular depth, we propose a novel Surround-view Temporal Stereo (STS) technique that leverages the geometric correspondence across frames over time to facilitate accurate depth learning. Specifically, we regard the fields of view of all cameras around the ego vehicle as a unified view, namely the surround view, and perform temporal stereo matching on it. The geometric correspondence between different frames is exploited by STS and combined with monocular depth to yield the final depth prediction. Comprehensive experiments on nuScenes show that STS greatly boosts 3D detection ability, particularly for medium- and long-distance objects. On BEVDepth with a ResNet-50 backbone, STS improves mAP and NDS by 2.6% and 1.4%, respectively. Consistent improvements are observed when using a larger backbone and larger image resolution, demonstrating its effectiveness.
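A minimal sketch of the fusion step described above: per-pixel depth-bin scores from temporal stereo matching are combined with a monocular depth distribution, and the final depth is the expectation over the fused distribution. The bin layout and the additive fusion are illustrative assumptions rather than the exact STS formulation.

```python
import torch

def fuse_depth(mono_logits, stereo_logits, depth_bins):
    """
    mono_logits, stereo_logits: (B, D, H, W) per-bin scores from the two branches
    depth_bins: (D,) candidate depths in meters
    """
    prob = torch.softmax(mono_logits + stereo_logits, dim=1)   # fused depth distribution
    depth = (prob * depth_bins.view(1, -1, 1, 1)).sum(dim=1)   # expected depth, (B, H, W)
    return depth

bins = torch.linspace(2.0, 58.0, 112)
d = fuse_depth(torch.randn(1, 112, 16, 44), torch.randn(1, 112, 16, 44), bins)
```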
The subspace learning machine (SLM), built upon classification and regression ideas from decision trees (DT), was recently proposed to offer higher performance in general classification and regression tasks. Its performance improvement comes at the expense of higher computational complexity. In this work, we investigate two ways to accelerate SLM. First, we adopt the particle swarm optimization (PSO) algorithm to speed up the search for a discriminant dimension expressed as a linear combination of current dimensions. The search for optimal weights in the linear combination is computationally heavy; it is accomplished by probabilistic search in the original SLM. The acceleration of SLM with PSO requires 10-20 times fewer iterations. Second, we leverage parallel processing in the SLM implementation. Experimental results show that the accelerated SLM method achieves a speedup factor of 577 in training time while maintaining classification/regression performance comparable to the original SLM.
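A minimal particle swarm optimization (PSO) sketch in the spirit of the acceleration described above: searching for a unit weight vector whose one-dimensional projection of the features best separates two classes. The fitness function and PSO hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def fitness(w, X, y):
    """Separation of the two class means over the pooled std along projection w."""
    z = X @ (w / (np.linalg.norm(w) + 1e-12))
    return abs(z[y == 0].mean() - z[y == 1].mean()) / (z.std() + 1e-12)

def pso(X, y, n_particles=30, n_iters=50, inertia=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pos = rng.normal(size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([fitness(p, X, y) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest

# Usage on toy two-class data:
X = np.vstack([np.random.randn(100, 5), np.random.randn(100, 5) + 1.0])
y = np.array([0] * 100 + [1] * 100)
w_opt = pso(X, y)
```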
Pre-trained models have been effective in many code intelligence tasks. These models are pre-trained on large-scale unlabeled corpora and then fine-tuned on downstream tasks. However, as the inputs to pre-training and downstream tasks take different forms, it is hard to fully explore the knowledge of pre-trained models. Besides, the performance of fine-tuning strongly relies on the amount of downstream data, while in practice scenarios with scarce data are common. Recent studies in natural language processing (NLP) show that prompt tuning, a new paradigm for tuning, alleviates the above issues and achieves promising results on various NLP tasks. In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively little data. In this paper, we empirically evaluate the usage and effects of prompt tuning in code intelligence tasks. We prompt-tune the popular pre-trained models CodeBERT and CodeT5 and experiment on three code intelligence tasks, including defect prediction, code summarization, and code translation. Our experimental results show that prompt tuning consistently outperforms fine-tuning on all three tasks. In addition, prompt tuning shows great potential in low-resource scenarios, e.g., improving the BLEU score of fine-tuning by more than 26% on average for code summarization. Our results suggest that, instead of fine-tuning, we can adapt prompt tuning for code intelligence tasks to achieve better performance, especially when task-specific data is scarce.
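A minimal PyTorch sketch of the soft-prompt-tuning idea described above: a small set of trainable prompt embeddings is prepended to the input embeddings of a frozen backbone, and only the prompt parameters are updated. The toy Transformer encoder stands in for a pre-trained model such as CodeBERT or CodeT5; all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    def __init__(self, frozen_encoder, embed_dim, n_prompt_tokens=20):
        super().__init__()
        self.encoder = frozen_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                       # keep the backbone frozen
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, embed_dim) * 0.02)

    def forward(self, token_embeddings):                  # (B, L, D) input embeddings
        b = token_embeddings.size(0)
        prompts = self.prompt.unsqueeze(0).expand(b, -1, -1)
        return self.encoder(torch.cat([prompts, token_embeddings], dim=1))

# Toy usage: a Transformer encoder standing in for the pre-trained model.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2)
model = SoftPromptWrapper(backbone, embed_dim=64)
out = model(torch.randn(2, 50, 64))        # only model.prompt is trainable here
```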
Demystifying the delay propagation mechanisms among multiple airports is fundamental to precise and interpretable delay prediction, which is crucial for all aviation industry stakeholders. The principal challenge lies in effectively leveraging the spatiotemporal dependencies and exogenous factors related to delay propagation. However, previous works only consider limited spatiotemporal patterns with few factors. To promote more comprehensive propagation modeling for delay prediction, we propose the SpatioTemporal Propagation Network (STPN), a space-time separable graph convolutional network that is novel in capturing spatiotemporal dependencies. For spatial relation modeling, we propose a multi-graph convolution model that considers both geographic proximity and airline schedules. For temporal dependency capturing, we propose a multi-head self-attention mechanism that can be learned end-to-end and explicitly reasons about multiple kinds of temporal dependencies in delay time series. We show that the joint spatial and temporal learning model yields a sum of Kronecker products, which factors the spatiotemporal dependencies into several spatial and temporal adjacency matrices. In this way, STPN allows cross-talk between spatial and temporal factors for modeling delay propagation. Furthermore, a squeeze-and-excitation module is added to each layer of STPN to boost meaningful spatiotemporal features. To this end, we apply STPN to multi-step arrival and departure delay prediction in large-scale airport networks. To validate the effectiveness of our model, we experiment with two real-world delay datasets, covering US and China flight delays, and show that STPN outperforms state-of-the-art methods. In addition, the counterfactuals produced by STPN show that it learns interpretable delay propagation patterns.
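A small numpy illustration of the structural claim above: modeling joint space-time dependencies as a sum of Kronecker products of temporal and spatial adjacency matrices, followed by one graph-convolution-style update. The matrix contents and sizes are toy assumptions, not the STPN implementation.

```python
import numpy as np

n_airports, n_steps, n_feat = 4, 3, 2
A_space = [np.random.rand(n_airports, n_airports) for _ in range(2)]  # e.g. geography, schedules
A_time = [np.eye(n_steps), np.eye(n_steps, k=-1)]                     # self and one-step-back links

# Joint space-time adjacency as a sum of Kronecker products.
A_st = sum(np.kron(A_t, A_s) for A_t, A_s in zip(A_time, A_space))

X = np.random.rand(n_steps * n_airports, n_feat)   # stacked (time, airport) node features
W = np.random.rand(n_feat, n_feat)
X_next = np.tanh(A_st @ X @ W)                     # one spatiotemporal propagation step
```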
In this work, we study the design of robust learning systems that offer stable performance under a wide range of supervision degrees. We choose the image classification problem as an illustrative example and focus on the design of modular systems that consist of three learning modules: representation learning, feature learning, and decision learning. We discuss ways to adjust each module so that the design is robust with respect to different numbers of training samples. Based on these ideas, we propose two families of learning systems. One adopts the classical histogram of oriented gradients (HOG) features, while the other uses successive-subspace-learning (SSL) features. We benchmark their performance against LeNet-5, an end-to-end optimized neural network, on the MNIST and Fashion-MNIST datasets. The number of training samples per image class varies from the extremely weak supervision condition (i.e., 1 labeled sample per class) to the strong supervision condition (i.e., 4096 labeled samples per class), with gradual transitions in between (i.e., $2^n$, $n = 0, 1, \cdots, 12$). Experimental results show that the two families of modular learning systems offer more robust performance than LeNet-5: they both outperform LeNet-5 for small $n$ and have performance comparable to LeNet-5 for large $n$.
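A minimal sketch of the HOG-based modular pipeline described above: a representation/feature module (histogram of oriented gradients) followed by a simple decision module (logistic regression). The random images stand in for MNIST digits; the split into modules mirrors the description, while the exact settings are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LogisticRegression

def extract_hog(images):
    """images: (N, 28, 28) grayscale arrays -> (N, F) HOG feature matrix."""
    return np.stack([hog(img, orientations=9, pixels_per_cell=(7, 7),
                         cells_per_block=(2, 2)) for img in images])

rng = np.random.default_rng(0)
X_img = rng.random((200, 28, 28))          # stand-in for MNIST images
y = rng.integers(0, 2, size=200)           # stand-in labels

clf = LogisticRegression(max_iter=1000).fit(extract_hog(X_img), y)   # decision module
print(clf.score(extract_hog(X_img), y))
```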
The application of machine learning to image and video data often yields high-dimensional feature spaces. Effective feature selection techniques identify a discriminant feature subspace that lowers computational and modeling costs with little performance degradation. A novel supervised feature selection methodology is proposed for machine learning decisions in this work. The resulting tests are called the discriminant feature test (DFT) and the relevant feature test (RFT) for classification and regression problems, respectively. The DFT and RFT procedures are described in detail. Furthermore, we compare the effectiveness of DFT and RFT with several classic feature selection methods. To this end, we use deep features obtained by LeNet-5 for the MNIST and Fashion-MNIST datasets as illustrative examples. Other datasets with handcrafted and gene-expression features are also included for performance evaluation. Experimental results show that DFT and RFT can select a lower-dimensional feature subspace distinctly and robustly while maintaining high decision performance.
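A simplified sketch of the discriminant-feature-test idea described above: each feature dimension is scored independently by the weighted class entropy of its best binary split, and the lowest-loss (most discriminant) dimensions are kept. The threshold grid and scoring details are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def dft_score(x, y, n_thresholds=16):
    """Lower score = more discriminant 1-D feature."""
    best = np.inf
    for t in np.linspace(x.min(), x.max(), n_thresholds + 2)[1:-1]:
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        loss = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        best = min(best, loss)
    return best

def select_features(X, y, k):
    scores = np.array([dft_score(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[:k]          # indices of the k most discriminant features

# Usage on toy data: feature 0 is informative, the rest are noise.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)
X = rng.normal(size=(300, 10)); X[:, 0] += 2 * y
print(select_features(X, y, k=3))
```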
Projection operations are a typical computation bottleneck in online learning. In this paper, we enable projection-free online learning within the framework of Online Convex Optimization with Memory (OCO-M) -- OCO-M captures how the history of decisions affects the current outcome by allowing the online learning loss functions to depend on both current and past decisions. Particularly, we introduce the first projection-free meta-base learning algorithm with memory that minimizes dynamic regret, i.e., that minimizes the suboptimality against any sequence of time-varying decisions. We are motivated by artificial intelligence applications where autonomous agents need to adapt to time-varying environments in real-time, accounting for how past decisions affect the present. Examples of such applications are: online control of dynamical systems; statistical arbitrage; and time series prediction. The algorithm builds on the Online Frank-Wolfe (OFW) and Hedge algorithms. We demonstrate how our algorithm can be applied to the online control of linear time-varying systems in the presence of unpredictable process noise. To this end, we develop the first controller with memory and bounded dynamic regret against any optimal time-varying linear feedback control policy. We validate our algorithm in simulated scenarios of online control of linear time-invariant systems.
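A minimal sketch of the projection-free ingredient mentioned above: an online Frank-Wolfe style update that replaces projection with a linear minimization oracle, here over the probability simplex where the oracle is simply a vertex pick. The losses, step sizes, and feasible set are toy assumptions, not the paper's meta-base algorithm with memory.

```python
import numpy as np

def lmo_simplex(grad):
    """Linear minimization oracle over the simplex: return the minimizing vertex."""
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def online_frank_wolfe(grads, dim):
    x = np.full(dim, 1.0 / dim)             # start at the simplex center
    agg_grad = np.zeros(dim)
    decisions = []
    for t, g in enumerate(grads, start=1):
        agg_grad += g                        # follow-the-leader style aggregate gradient
        v = lmo_simplex(agg_grad)
        gamma = 2.0 / (t + 2)
        x = (1 - gamma) * x + gamma * v      # convex combination stays feasible, no projection
        decisions.append(x.copy())
    return decisions

# Usage with random linear losses f_t(x) = <g_t, x>:
decisions = online_frank_wolfe([np.random.randn(5) for _ in range(100)], dim=5)
```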
Several self-supervised representation learning methods have been proposed for reinforcement learning (RL) with rich observations. For real-world applications of RL, recovering underlying latent states is crucial, particularly when sensory inputs contain irrelevant and exogenous information. In this work, we study how information bottlenecks can be used to construct latent states efficiently in the presence of task-irrelevant information. We propose architectures that utilize variational and discrete information bottlenecks, coined as RepDIB, to learn structured factorized representations. Exploiting the expressiveness brought by factorized representations, we introduce a simple, yet effective, bottleneck that can be integrated with any existing self-supervised objective for RL. We demonstrate this across several online and offline RL benchmarks, along with a real robot arm task, where we find that compressed representations with RepDIB can lead to strong performance improvements, as the learned bottlenecks help predict only the relevant state while ignoring irrelevant information.
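A minimal PyTorch sketch of the variational information bottleneck idea underlying the representations above: an encoder outputs a Gaussian over the latent code, and training trades off a downstream prediction loss against a KL "compression" term. The network sizes and beta weight are illustrative assumptions, not RepDIB's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBEncoder(nn.Module):
    def __init__(self, obs_dim, latent_dim):
        super().__init__()
        self.net = nn.Linear(obs_dim, 2 * latent_dim)
    def forward(self, obs):
        mu, logvar = self.net(obs).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1).mean()
        return z, kl

enc, head = VIBEncoder(obs_dim=32, latent_dim=8), nn.Linear(8, 4)
obs, target = torch.randn(16, 32), torch.randint(0, 4, (16,))
z, kl = enc(obs)
loss = F.cross_entropy(head(z), target) + 0.01 * kl    # prediction + compression terms
loss.backward()
```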